Experimental Learning of Causal Models with Latent Variables

Authors

  • Sam Maes
  • Stijn Meganck
  • Philippe Leray
Abstract

This article discusses graphical models that can handle latent variables without explicitly modeling them quantitatively. Several paradigms exist for such problem domains; two of them are semi-Markovian causal models and maximal ancestral graphs. Applying these techniques to a problem domain typically consists of several steps: structure learning from observational and experimental data, parameter learning, probabilistic inference, and quantitative causal inference. A problem is that research on each of the existing approaches focuses on only one or a few of the steps involved in modeling a problem that includes latent variables. In other work we have investigated the entire process, from observational and experimental data up to different types of efficient inference. The goal of this article is to focus on learning the structure of causal models in the presence of latent variables from a combination of observational and experimental data.

Semi-Markovian causal models (SMCMs) are an approach developed by Pearl and Tian [3, 6]. They are specifically suited for performing quantitative causal inference in the presence of latent variables. However, at this time no efficient parametrisation of such models is provided, and there are no techniques for performing efficient probabilistic inference. Furthermore, there are no techniques to learn these models from observational data, experimental data, or both.

Maximal ancestral graphs (MAGs) are an approach developed by Richardson et al. [4]. They are specifically suited for structure learning in the presence of latent variables from observational data. However, the techniques only learn up to Markov equivalence and provide no guidance on which additional experiments to perform in order to obtain the fully oriented causal graph; see [1, 2] for that type of result for Bayesian networks without latent variables. Furthermore, no parametrisation for discrete variables has yet been provided for MAGs, and no techniques for probabilistic inference have been developed. There is some work on algorithms for causal inference, but it is restricted to causal inference ...
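To make the setting concrete, the following is a minimal Python sketch, illustrative only and not the formalism or algorithm of the article: it encodes an SMCM-style graph with directed edges for direct causal relations and bidirected edges for confounding by an unmeasured common cause, plus a toy rule showing how a single-variable experiment can orient an edge that observational data alone leaves undirected. All class and method names are hypothetical.

```python
# Illustrative sketch (not the authors' method): a causal graph over observed
# variables with directed edges (a -> b: a directly causes b), bidirected edges
# (a <-> b: a and b share an unmeasured common cause), and unoriented
# adjacencies left over from observational structure learning.

class CausalGraph:
    def __init__(self, variables):
        self.variables = set(variables)
        self.directed = set()      # (a, b) means a -> b
        self.bidirected = set()    # frozenset({a, b}) means a <-> b
        self.undirected = set()    # frozenset({a, b}): orientation unknown

    def add_directed(self, a, b):
        self.directed.add((a, b))

    def add_bidirected(self, a, b):
        self.bidirected.add(frozenset({a, b}))

    def add_undirected(self, a, b):
        self.undirected.add(frozenset({a, b}))

    def orient_from_experiment(self, manipulated, affected):
        """If randomising `manipulated` changes the distribution of its
        neighbour `affected`, the edge must point manipulated -> affected:
        an intervention cuts the incoming edges of the manipulated variable,
        so the influence cannot run the other way."""
        edge = frozenset({manipulated, affected})
        if edge in self.undirected:
            self.undirected.remove(edge)
            self.add_directed(manipulated, affected)


# Toy usage: X - Y is unoriented after observational learning; an experiment
# that randomises X and observes a change in Y orients it as X -> Y.
g = CausalGraph({"X", "Y", "Z"})
g.add_undirected("X", "Y")
g.add_bidirected("Y", "Z")        # Y and Z share a latent common cause
g.orient_from_experiment("X", "Y")
print(g.directed)                 # {('X', 'Y')}
```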

Related articles

Causal Graphical Models with Latent Variables: Learning and Inference

Several paradigms exist for modeling causal graphical models for discrete variables that can handle latent variables without explicitly modeling them quantitatively. Applying them to a problem domain consists of different steps: structure learning, parameter learning and using them for probabilistic or causal inference. We discuss two well-known formalisms, namely semi-Markovian causal models a...

Pairwise Cluster Comparison for Learning Latent Variable Models

Learning a latent variable model (LVM) exploits the values of the measured variables, as manifested in the data, for causal discovery. Because the challenge in learning an LVM is similar to that faced in unsupervised learning, where the number of clusters and the classes that are represented by these clusters are unknown, we link causal discovery and clustering. We propose the concept of pairwise clus...

Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery

In nonlinear latent variable models or dynamic models, if we consider the latent variables as confounders (common causes), the noise dependencies imply further relations between the observed variables. Such models are then closely related to causal discovery in the presence of nonlinear confounders, which is a challenging problem. However, generally in such models the observation noise is assum...

Discussion of "Learning Equivalence Classes of Acyclic Models with Latent and Selection Variables from Multiple Datasets with Overlapping Variables"

In automated causal discovery, the constraint-based approach seeks to learn an (equivalence) class of causal structures (with possibly latent variables and/or selection variables) that are compatible (according to some assumptions, usually the causal Markov and faithfulness assumptions) with the conditional dependence and independence relations found in data. In the paper under discussion, Till...
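As a rough illustration of the primitive that such constraint-based algorithms build on, here is a short Python sketch, under the assumption of discrete data, of a conditional independence test implemented as a chi-square test stratified over the conditioning variable. The function name and data layout are made up for illustration; this is not the procedure from the paper under discussion.

```python
# Sketch of the basic primitive behind constraint-based causal discovery:
# testing whether discrete variables X and Y are independent given Z, using a
# chi-square test stratified over the values of Z. Illustrative only.
import numpy as np
from scipy.stats import chi2, chi2_contingency


def cond_independent(x, y, z, alpha=0.05):
    """Return True if the hypothesis 'X independent of Y given Z' is NOT rejected."""
    stat, dof = 0.0, 0
    for z_val in set(z):
        idx = [i for i, v in enumerate(z) if v == z_val]
        xs = [x[i] for i in idx]
        ys = [y[i] for i in idx]
        x_levels, y_levels = sorted(set(xs)), sorted(set(ys))
        if len(x_levels) < 2 or len(y_levels) < 2:
            continue  # this stratum carries no information about X-Y dependence
        table = np.zeros((len(x_levels), len(y_levels)))
        for xv, yv in zip(xs, ys):
            table[x_levels.index(xv), y_levels.index(yv)] += 1
        s, _, d, _ = chi2_contingency(table)
        stat += s
        dof += d
    if dof == 0:
        return True  # no stratum showed variation, so no evidence of dependence
    return chi2.sf(stat, dof) >= alpha


# Toy usage: data generated as X -> Z -> Y, so X and Y should be judged
# independent given Z (the print typically outputs True).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 1000)
z = np.where(rng.random(1000) < 0.8, x, 1 - x)   # Z noisily copies X
y = np.where(rng.random(1000) < 0.8, z, 1 - z)   # Y noisily copies Z
print(cond_independent(list(x), list(y), list(z)))
```

A constraint-based learner would call a test like this for many variable pairs and conditioning sets, and use the resulting pattern of (in)dependences to constrain the equivalence class of causal structures.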

Discovery of Causal Models that Contain Latent Variables Through Bayesian Scoring of Independence Constraints

Discovering causal structure from observational data in the presence of latent variables remains an active research area. Constraint-based causal discovery algorithms are relatively efficient at discovering such causal models from data using independence tests. Typically, however, they derive and output only one such model. In contrast, Bayesian methods can generate and probabilistically score ...

Journal title:

Volume   Issue

Pages  -

Publication date: 2006